Existing 3D-aware image synthesis approaches mainly focus on generating a single canonical object and show limited capacity in composing a complex scene containing a variety of objects. This work presents DisCoScene: a 3D-aware generative model for high-quality and controllable scene synthesis. The key ingredient of our method is a very abstract object-level representation (i.e., 3D bounding boxes without semantic annotation) as the scene layout prior, which is simple to obtain, general enough to describe various scene contents, and yet informative enough to disentangle objects and background. Moreover, it serves as an intuitive user control for scene editing. Based on such a prior, the proposed model spatially disentangles the whole scene into object-centric generative radiance fields by learning on only 2D images with global-local discrimination. Our model attains the generation fidelity and editing flexibility of individual objects while being able to efficiently compose objects and the background into a complete scene. We demonstrate state-of-the-art performance on many scene datasets, including the challenging Waymo outdoor dataset. Project page: https://snap-research.github.io/discoscene/
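To make the object-level layout prior more concrete, the following is a minimal PyTorch sketch of composing object-centric radiance fields from 3D bounding boxes: sample points are mapped into each box's normalized frame, a shared conditional field is queried per object, and densities outside a box are zeroed before composition. The module names, the shared-MLP design, and the density-weighted blending rule are illustrative assumptions, not DisCoScene's actual implementation.

```python
import torch
import torch.nn as nn

class ObjectField(nn.Module):
    """Tiny conditional radiance field shared by all objects."""
    def __init__(self, z_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3),                # density + colour
        )

    def forward(self, x_local, z):
        out = self.mlp(torch.cat([x_local, z], dim=-1))
        sigma = torch.relu(out[..., :1])             # non-negative density
        rgb = torch.sigmoid(out[..., 1:])            # colour in [0, 1]
        return sigma, rgb

def compose_scene(points, boxes, codes, field):
    """points: (N, 3) world-space samples; boxes: (K, 6) center+size;
    codes: (K, z_dim) per-object latents. Returns composited sigma/rgb."""
    sigma_acc = torch.zeros(points.shape[0], 1)
    rgb_acc = torch.zeros(points.shape[0], 3)
    for k in range(boxes.shape[0]):
        center, size = boxes[k, :3], boxes[k, 3:]
        x_local = (points - center) / (size / 2)     # box-normalised coordinates
        inside = (x_local.abs() <= 1).all(dim=-1, keepdim=True).float()
        z = codes[k].expand(points.shape[0], -1)
        sigma, rgb = field(x_local, z)
        sigma = sigma * inside                        # each object only lives in its box
        sigma_acc = sigma_acc + sigma
        rgb_acc = rgb_acc + sigma * rgb               # density-weighted colour
    rgb_acc = rgb_acc / (sigma_acc + 1e-8)
    return sigma_acc, rgb_acc

field = ObjectField()
pts = torch.rand(1024, 3) * 4 - 2                     # samples along camera rays
boxes = torch.tensor([[0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
                      [1.0, 0.5, 0.0, 0.8, 0.8, 0.8]])
codes = torch.randn(2, 64)
sigma, rgb = compose_scene(pts, boxes, codes, field)
```

In the actual method the composited field would also include a background model, be volume-rendered along camera rays, and be trained adversarially with global-local discrimination; the sketch only shows the spatial disentanglement step.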
Video generation requires synthesizing consistent and persistent frames with dynamic content over time. This work investigates modeling the temporal relations for composing videos of arbitrary length, from a few frames to even infinitely many, using generative adversarial networks (GANs). First, towards composing adjacent frames, we show that the alias-free operation for single-image generation, together with adequately pre-learned knowledge, brings a smooth frame transition without compromising per-frame quality. Second, by incorporating the temporal shift module (TSM), originally designed for video understanding, into the discriminator, we advance the generator in synthesizing more consistent dynamics. Third, we develop a novel B-spline based motion representation that ensures temporal smoothness and enables infinite-length video generation, going beyond the number of frames used in training. A low-rank temporal modulation is also proposed to alleviate repeated content in long video generation. We evaluate our approach on various datasets and show substantial improvements over video generation baselines. Code and models will be publicly available at https://genforce.github.io/StyleSV.
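The temporal shift module mentioned above is straightforward to reproduce; the sketch below (PyTorch) shows the plain TSM operation applied to per-frame discriminator features, with a fraction of channels shifted one step forward or backward in time so adjacent frames exchange information. The 1/4 shift ratio and the surrounding tensor shapes are common defaults assumed here, not necessarily the paper's exact configuration.

```python
import torch

def temporal_shift(x, shift_div=4):
    """x: (batch, time, channels, height, width).
    Shifts a fraction of channels one step forward/backward in time so that
    per-frame features can exchange information across adjacent frames."""
    b, t, c, h, w = x.shape
    fold = c // shift_div
    out = torch.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]                   # shift these channels to the next frame
    out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]   # shift these to the previous frame
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels stay in place
    return out

feats = torch.randn(2, 8, 64, 16, 16)      # per-frame discriminator features
shifted = temporal_shift(feats)            # mixed along time before the next conv layer
```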
A generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D plays as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training in which G assigns a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. That way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt presented in this work could inspire the community with better-designed generator-leading tasks for GAN improvement.
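As an illustration of the generator-leading task, the toy PyTorch sketch below adds a reconstruction term to the discriminator objective: D maps a real image to a latent that G must decode back into that image, alongside the usual non-saturating adversarial loss. The tiny MLP networks, the extra latent head on D, and the L1 penalty are assumptions for exposition; the actual models in the paper are StyleGAN2-scale networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim = 128
G = nn.Sequential(nn.Linear(z_dim, 1024), nn.ReLU(), nn.Linear(1024, 3 * 32 * 32), nn.Tanh())
D_feat = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1024), nn.ReLU())
D_logit = nn.Linear(1024, 1)          # standard real/fake head
D_latent = nn.Linear(1024, z_dim)     # head that must align with G's latent space

def d_step(real, fake):
    # Standard adversarial terms (non-saturating logistic loss for D).
    adv = F.softplus(D_logit(D_feat(fake))).mean() + \
          F.softplus(-D_logit(D_feat(real))).mean()
    # Generator-assigned task: features extracted by D from a real image
    # must be decodable by G back into that image.
    z_rec = D_latent(D_feat(real))
    rec = (G(z_rec) - real.flatten(1)).abs().mean()
    return adv + rec

real = torch.rand(4, 3, 32, 32) * 2 - 1
fake = G(torch.randn(4, z_dim)).view(4, 3, 32, 32).detach()
loss = d_step(real, fake)
loss.backward()
```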
The discriminator plays a vital role in training a generative adversarial network (GAN) by differentiating real and synthesized samples. While the real data distribution remains the same, the synthesis distribution keeps changing because of the evolving generator, causing a corresponding shift in the discriminator's binary classification task. We argue that a discriminator with on-the-fly adjustment of its capacity can better accommodate such a time-varying task. A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves synthesis performance without incurring any additional computation cost or training objectives. Two capacity-adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the over-fitting issue of the discriminator. Experiments on 2D and 3D-aware image synthesis tasks across a range of datasets confirm the generalizability of DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is synergistic with other discriminator-improving approaches (including data augmentation, regularizers, and pre-training) and brings a continuous performance gain when combined with them for learning GANs.
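A toy sketch of the capacity-adjustment idea is given below (PyTorch). Channel masking is used here only to emulate a discriminator whose effective layer width shrinks (or grows) over training, together with a simple linear schedule; neither the masking trick nor the schedule is claimed to be DynamicD's actual mechanism.

```python
import torch
import torch.nn as nn

def width_multiplier(step, total_steps, regime="limited-data"):
    """Linear schedule: shrink width with limited data, grow it with sufficient data."""
    p = step / total_steps
    if regime == "limited-data":
        return 1.0 - 0.5 * p          # gradually decrease width to fight over-fitting
    return 0.5 + 0.5 * p              # sufficient data: gradually increase capacity

class MaskedConv(nn.Module):
    """Conv layer whose effective output width follows the current multiplier."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.c_out = c_out

    def forward(self, x, mult):
        keep = max(1, int(self.c_out * mult))
        y = self.conv(x)
        mask = torch.zeros(1, self.c_out, 1, 1, device=y.device)
        mask[:, :keep] = 1.0          # only the first `keep` channels stay active
        return y * mask

layer = MaskedConv(3, 64)
x = torch.randn(2, 3, 32, 32)
for step in (0, 5000, 10000):
    mult = width_multiplier(step, total_steps=10000, regime="limited-data")
    out = layer(x, mult)              # effective width shrinks as training proceeds
```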
Unsupervised domain adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. In this paper, we propose Prototypical Contrast Adaptation (ProCA), a simple and efficient contrastive learning method for unsupervised domain-adaptive semantic segmentation. Previous domain adaptation methods merely consider the alignment of intra-class representational distributions across domains, while the inter-class structural relationship is insufficiently explored, so the aligned representations on the target domain may not be as easily discriminated as those on the source domain. Instead, ProCA incorporates inter-class information into class-wise prototypes and adopts class-centered distribution alignment for adaptation. By treating the prototype of the same class as positive and the prototypes of other classes as negatives to achieve class-centered distribution alignment, ProCA achieves state-of-the-art performance on classic domain adaptation tasks, i.e., GTA5 → Cityscapes and SYNTHIA → Cityscapes. Code is available at https://github.com/jiangzhengkai/proca
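The class-centered contrastive alignment can be illustrated with the short PyTorch sketch below: class prototypes are estimated as per-class mean features, and each pixel feature is attracted to its own class prototype and repelled from the others via an InfoNCE-style loss. The temperature value, the plain mean-based prototype update, and the flattened feature shapes are assumptions for illustration, not ProCA's exact formulation.

```python
import torch
import torch.nn.functional as F

def prototype_contrast_loss(feats, labels, prototypes, tau=0.1):
    """feats: (N, D) pixel features, labels: (N,) class ids in [0, C),
    prototypes: (C, D). Each feature is pulled to its own class prototype
    (positive) and pushed away from the other class prototypes (negatives)."""
    feats = F.normalize(feats, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    logits = feats @ prototypes.t() / tau        # (N, C) similarity to every prototype
    return F.cross_entropy(logits, labels)

def update_prototypes(feats, labels, num_classes):
    """Per-class mean features; in practice this would be an EMA over batches."""
    protos = torch.zeros(num_classes, feats.shape[1])
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return protos

feats = torch.randn(4096, 256)                   # flattened pixel embeddings
labels = torch.randint(0, 19, (4096,))           # 19 classes, as in Cityscapes
protos = update_prototypes(feats, labels, num_classes=19)
loss = prototype_contrast_loss(feats, labels, protos)
```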
Making generative models 3D-aware bridges the 2D image space and the 3D physical world yet remains challenging. Recent attempts equip a generative adversarial network (GAN) with a neural radiance field (NeRF), which maps 3D coordinates to pixel values, as a 3D prior. However, the implicit function in NeRF has a very local receptive field, making it hard for the generator to become aware of the global structure. Meanwhile, NeRF is built on volume rendering, which can be too costly to produce high-resolution results and hence increases the optimization difficulty. To alleviate these two problems, we propose a novel framework, termed VolumeGAN, for high-fidelity 3D-aware image synthesis, by explicitly learning a structural representation and a textural representation. We first learn a feature volume to represent the underlying structure, which is then converted to a feature field using a NeRF-like model. The feature field is further accumulated into a 2D feature map as the textural representation, followed by a neural renderer for appearance synthesis. Such a design enables independent control of the shape and the appearance. Extensive experiments on a wide range of datasets show that our approach achieves substantially higher image quality and better 3D control than previous methods.
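The structure-texture pipeline can be summarized by the schematic PyTorch sketch below: a learned feature volume is queried at 3D sample points, a NeRF-like MLP converts the sampled features into a feature field, the field is accumulated along depth into a 2D feature map, and a small CNN renderer produces the image. All module sizes, the softmax-based accumulation, and the fixed camera grid are simplifying assumptions rather than VolumeGAN's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VolumeGANSketch(nn.Module):
    def __init__(self, vol_c=32, feat_c=64, res=32, depth=16):
        super().__init__()
        self.res, self.depth = res, depth
        self.feature_volume = nn.Parameter(torch.randn(1, vol_c, 16, 16, 16))
        self.field_mlp = nn.Sequential(              # NeRF-like conversion to a feature field
            nn.Linear(vol_c + 3, 128), nn.ReLU(),
            nn.Linear(128, feat_c + 1),              # feature + density
        )
        self.renderer = nn.Sequential(               # 2D neural renderer for appearance
            nn.Conv2d(feat_c, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self):
        r, d = self.res, self.depth
        # Sample points on a camera-facing grid of rays with d depth steps each.
        lin = torch.linspace(-1, 1, r)
        dep = torch.linspace(-1, 1, d)
        z, y, x = torch.meshgrid(dep, lin, lin, indexing="ij")
        grid = torch.stack([x, y, z], dim=-1).unsqueeze(0)          # (1, d, r, r, 3)
        vol_feat = F.grid_sample(self.feature_volume, grid, align_corners=True)
        vol_feat = vol_feat.permute(0, 2, 3, 4, 1)                  # (1, d, r, r, C)
        out = self.field_mlp(torch.cat([vol_feat, grid], dim=-1))
        feat, sigma = out[..., :-1], torch.relu(out[..., -1:])
        weights = torch.softmax(sigma, dim=1)                       # crude stand-in for volume rendering
        feat_map = (weights * feat).sum(dim=1).permute(0, 3, 1, 2)  # (1, feat_c, r, r)
        return self.renderer(feat_map)                              # (1, 3, r, r)

img = VolumeGANSketch()()                                           # a 32x32 rendering
```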
Semi-supervised action recognition is a challenging but important task because of the high cost of data annotation. A common approach to this problem is to assign pseudo-labels to unlabeled data, which are then used as additional supervision during training. Typically in recent work, the pseudo-labels are obtained by training a model on the labeled data and then using its confident predictions to teach itself. In this work, we propose a more effective pseudo-labeling scheme, called Cross-Model Pseudo-Labeling (CMPL). Concretely, in addition to the primary backbone, we introduce a lightweight auxiliary network and ask the two to predict pseudo-labels for each other. We observe that, due to their different structural biases, the two models tend to learn complementary representations from the same video clips. Each model can therefore benefit from its counterpart by using cross-model predictions as supervision. Experiments on different data-partitioning protocols demonstrate significant improvements over existing alternatives. For example, CMPL achieves 17.6% and 25.1% Top-1 accuracy on Kinetics-400 and UCF-101, respectively, using only the RGB modality and 1% labeled data, outperforming our baseline model, FixMatch, by 9.0% and 10.3%.
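The cross-model supervision rule is easy to state in code; the sketch below (PyTorch) trains each network on unlabeled clips using the confident predictions of the other. The 0.95 confidence threshold and the linear stand-in classifiers are assumptions for illustration; in the paper the two models are a full video backbone and a lightweight auxiliary network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 101
primary = nn.Linear(512, num_classes)     # stands in for the main video backbone
auxiliary = nn.Linear(512, num_classes)   # stands in for the lightweight counterpart

def cross_pseudo_loss(logits_a, logits_b, threshold=0.95):
    """Model A is supervised by pseudo-labels from model B (and vice versa in the
    symmetric call), keeping only predictions above the confidence threshold."""
    with torch.no_grad():
        probs_b = F.softmax(logits_b, dim=-1)
        conf, pseudo = probs_b.max(dim=-1)
        keep = conf >= threshold
    if not keep.any():
        return logits_a.sum() * 0.0        # no confident pseudo-labels in this batch
    return F.cross_entropy(logits_a[keep], pseudo[keep])

unlabeled_feats = torch.randn(8, 512)      # clip-level features of unlabeled videos
lp, la = primary(unlabeled_feats), auxiliary(unlabeled_feats)
loss = cross_pseudo_loss(lp, la) + cross_pseudo_loss(la, lp)
```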
The success of generative adversarial networks (GANs) is largely built upon the adversarial training between a generator (G) and a discriminator (D). They are expected to reach a certain equilibrium where D cannot distinguish the generated images from the real ones. However, such an equilibrium is rarely achieved in practical GAN training; instead, D almost always surpasses G. We attribute this phenomenon to the information asymmetry between D and G. Specifically, we observe that D learns where to attend when determining whether an image is real or fake, but G has no explicit clue about which regions to focus on for a particular synthesis. To alleviate D's dominance of the competition, we aim to raise the spatial awareness of G. Randomly sampled multi-level heatmaps are encoded into the intermediate layers of G as an inductive bias, so that G can purposefully improve the synthesis of certain image regions. We further propose to align the spatial awareness of G with the attention of D. In this way, we effectively lessen the information gap between D and G. Extensive results show that our method pushes the two-player game in GANs closer to equilibrium, leading to better synthesis performance. As a byproduct, the introduced spatial awareness facilitates interactive editing of the output synthesis. Demo video and more results are available at https://genforce.github.io/eqgan/.
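A toy PyTorch sketch of raising the generator's spatial awareness follows: randomly sampled Gaussian heatmaps are encoded by a small convolution and injected into an intermediate feature map of G as an inductive bias. The single-level heatmap and the additive injection are simplifying assumptions; the method described above uses multi-level heatmaps and further aligns them with D's attention.

```python
import torch
import torch.nn as nn

def sample_heatmap(batch, size, sigma=0.15):
    """One Gaussian blob per sample at a random location; values in (0, 1]."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                            torch.linspace(0, 1, size), indexing="ij")
    centers = torch.rand(batch, 2)
    d2 = (xs - centers[:, 0, None, None]) ** 2 + (ys - centers[:, 1, None, None]) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2)).unsqueeze(1)   # (B, 1, size, size)

class SpatialInjection(nn.Module):
    """Encodes a heatmap and adds it to an intermediate generator feature map."""
    def __init__(self, channels):
        super().__init__()
        self.encode = nn.Conv2d(1, channels, 3, padding=1)

    def forward(self, feat, heatmap):
        return feat + self.encode(heatmap)   # bias G's features toward the sampled regions

feat = torch.randn(4, 256, 16, 16)           # an intermediate G feature map
hm = sample_heatmap(4, 16)
feat = SpatialInjection(256)(feat, hm)
```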
This work aims to transfer a generative adversarial network (GAN) pre-trained on one image domain to a new domain given as few as just one target image. The main challenge is that, under such limited supervision, it is extremely difficult to synthesize photo-realistic and highly diverse images while acquiring the representative characteristics of the target. Unlike existing approaches that adopt a vanilla fine-tuning strategy, we import two lightweight modules into the generator and the discriminator, respectively. Concretely, we introduce an attribute adaptor into the generator while freezing its original parameters; through it, the generator can reuse the prior knowledge to the largest extent and hence maintain synthesis quality and diversity. We then equip the well-learned discriminator backbone with an attribute classifier to ensure that the generator captures the corresponding characteristics from the reference. Furthermore, considering the poor diversity of the training data (i.e., only one image), we propose to also constrain the diversity of the generative domain during training, alleviating the optimization difficulty. Our approach delivers appealing results under various settings, substantially surpassing state-of-the-art alternatives, especially in terms of synthesis diversity. Notably, our method works well even with a large domain gap, and robustly converges within a few minutes for each experiment.
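The two lightweight modules can be pictured with the schematic PyTorch sketch below: a trainable attribute adaptor modulates the latent fed into a frozen generator, and a trainable attribute classifier head sits on top of the frozen discriminator backbone. The per-channel scale-and-shift form of the adaptor and the toy stand-in networks are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class AttributeAdaptor(nn.Module):
    """Lightweight per-channel modulation applied to the frozen generator's latent."""
    def __init__(self, dim):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, w):
        return w * self.scale + self.shift

# Frozen pre-trained networks (toy stand-ins for the real GAN here).
generator = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3 * 32 * 32))
d_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
for p in list(generator.parameters()) + list(d_backbone.parameters()):
    p.requires_grad_(False)

adaptor = AttributeAdaptor(512)               # trainable
attr_classifier = nn.Linear(512, 1)           # trainable: target attribute vs. not

w = torch.randn(4, 512)                         # latent codes
fake = generator(adaptor(w)).view(4, 3, 32, 32)
attr_logit = attr_classifier(d_backbone(fake))  # pushes G to capture the reference's character
```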
Although Generative Adversarial Networks (GANs) have made significant progress in face synthesis, there is still limited understanding of what GANs learn in the latent representation that maps a random code to a photo-realistic image. In this work, we propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models and study the properties of the facial semantics encoded in the latent space. We first find that GANs learn various semantics in some linear subspaces of the latent space. After identifying these subspaces, we can realistically manipulate the corresponding facial attributes without retraining the model. We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection, resulting in more precise control of attribute manipulation. Besides manipulating gender, age, expression, and the presence of eyeglasses, we can even alter the face pose and fix artifacts accidentally made by GANs. Furthermore, we perform an in-depth face identity analysis and a layer-wise analysis to evaluate the editing results quantitatively. Finally, we apply our approach to real face editing by employing GAN inversion approaches and explicitly training feed-forward models on the synthetic data established by InterFaceGAN. Extensive experimental results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable face representation.
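The linear-subspace editing described above can be illustrated with a brief NumPy/scikit-learn sketch: a linear SVM fits a separating hyperplane for an attribute in the latent space, codes are edited by moving along the unit normal, and conditional manipulation projects one normal onto the orthogonal complement of another. The synthetic latents and the stand-in attribute labels below are assumptions; in practice the labels come from an off-the-shelf attribute predictor applied to synthesized faces.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
latents = rng.normal(size=(2000, 512))                    # sampled latent codes
# Stand-in binary attribute scores (real ones come from an attribute classifier).
labels = (latents[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)

svm = LinearSVC(C=1.0).fit(latents, labels)
n_attr = svm.coef_[0] / np.linalg.norm(svm.coef_[0])      # unit normal of the separating hyperplane

def edit(z, n, alpha):
    """Move a latent code along the attribute normal; alpha controls strength and sign."""
    return z + alpha * n

def condition(n_primary, n_condition):
    """Project out the conditioned attribute so editing one leaves the other unchanged."""
    n = n_primary - (n_primary @ n_condition) * n_condition
    return n / np.linalg.norm(n)

z = rng.normal(size=512)
z_edited = edit(z, n_attr, alpha=3.0)                     # e.g., push the attribute up
```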